loadModel
fun loadModel(modelType: OnnxModelType<*>, vararg executionProviders: ExecutionProvider = arrayOf(ExecutionProvider.CPU())): OnnxInferenceModel
Loads an ONNX model from Android resources. By default, the model is initialized with the ExecutionProvider.CPU execution provider.
Parameters
modelType
model type from ONNXModels
executionProviders
execution providers for model initialization.
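A minimal usage sketch is shown below. The hub construction, the concrete model type, and the NNAPI provider are illustrative assumptions; substitute an actual entry from ONNXModels and the execution providers available in your setup.

// Sketch only: ONNXModelHub construction and the model type are assumptions for illustration.
val hub = ONNXModelHub(context)

// Load with the default ExecutionProvider.CPU execution provider.
val cpuModel = hub.loadModel(ONNXModels.CV.MobilenetV1())

// Load with an explicitly passed execution provider (NNAPI is an assumed alternative here).
val acceleratedModel = hub.loadModel(ONNXModels.CV.MobilenetV1(), ExecutionProvider.NNAPI())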
open override fun <T : InferenceModel, U : InferenceModel> loadModel(modelType: ModelType<T, U>, loadingMode: LoadingMode): T
Equivalent to calling loadModel with the ExecutionProvider.CPU execution provider.
Parameters
modelType
model type from ONNXModels
loadingMode
this parameter is ignored
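A sketch of the same call through this overload (the hub, model type, and LoadingMode value are illustrative assumptions):

// Sketch only: loadingMode is ignored by this overload; the model is loaded
// with the default ExecutionProvider.CPU execution provider.
val model = hub.loadModel(ONNXModels.CV.MobilenetV1(), LoadingMode.SKIP_LOADING_IF_EXISTS)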